Neural Networks

Fred D. Jordan, webmaster@rccad.com



Application of recurrent networks to time-dependent problems

The following presents results of recurrent neural network simulations performed on a Cray T3D supercomputer. The basic idea was to train a network having both a large number of feedback loops and neurons with a temporal transfer function.

The first requirement was achieved by giving the network a toroidal topology, with each neuron connected to its 8 closest neighbours. The second was met by giving each neuron the same transfer function as an RC electrical circuit, which enables it to integrate information through time. Each neuron is also looped onto itself through a connection whose weight can be freely chosen, providing a means to adjust the stability of a single neuron. The minimum number of neurons needed to create oscillations is then 2 (a transfer function of order 2). The output of each neuron is thresholded with a classical sigmoid function.

The resulting system is highly complex, since it is a high-order non-linear dynamical system. We ran simulations with the number of neurons varying from 16 to 10000. The interest of the system lies essentially in its ability to generate any kind of signal, from constant to chaotic. With correctly tuned weights, the system is therefore potentially able to synthesize any temporal output in response to any temporal input.
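As a rough illustration (my own sketch, not the original Cray T3D code), such a leaky-integrator neuron on a toroidal grid could be simulated as follows; the grid size, time constant and random weights are arbitrary assumptions:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def simulate(n=32, steps=200, dt=0.1, tau=1.0, seed=0):
        # n, dt, tau and the weight scale are illustrative assumptions
        rng = np.random.default_rng(seed)
        v = rng.standard_normal((n, n)) * 0.1        # membrane state of an n x n toroidal grid
        w_nbr = rng.standard_normal((8, n, n)) * 0.5  # one weight towards each of the 8 neighbours
        w_self = rng.standard_normal((n, n)) * 0.5    # freely weighted self-loop
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        for _ in range(steps):
            out = sigmoid(v)                          # thresholded output of every neuron
            inp = w_self * out                        # self-loop contribution
            for k, (di, dj) in enumerate(offsets):
                # toroidal topology: np.roll wraps the grid around its edges
                inp += w_nbr[k] * np.roll(out, shift=(di, dj), axis=(0, 1))
            # RC-like (leaky integrator) update: dv/dt = (-v + input) / tau
            v += dt * (-v + inp) / tau
        return sigmoid(v)

    final_output = simulate()
    print(final_output.shape)  # (32, 32)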

Several techniques were tried for learning. The best results were obtained with an algorithm based on simulated annealing (a sketch of such an annealing loop is given after the list). We successfully simulated two kinds of functions:

  1. Classification of a temporal signal. This was used to classify seismic signals.
  2. Generation of commands for a robot arm.
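
For illustration only (the actual algorithm is described in the paper cited below), a minimal simulated-annealing loop for tuning the weights of such a network might look like this; the error function, perturbation scale and cooling schedule are assumptions:

    import numpy as np

    def anneal(error_fn, weights, steps=5000, t0=1.0, cooling=0.999, scale=0.05, seed=0):
        # error_fn(weights) is assumed to run the recurrent network on the
        # training signals and return a scalar error.
        rng = np.random.default_rng(seed)
        best, best_err = weights.copy(), error_fn(weights)
        err, t = best_err, t0
        for _ in range(steps):
            candidate = weights + rng.normal(0.0, scale, size=weights.shape)
            cand_err = error_fn(candidate)
            # accept improvements always, degradations with probability exp(-delta / T)
            if cand_err < err or rng.random() < np.exp((err - cand_err) / t):
                weights, err = candidate, cand_err
                if err < best_err:
                    best, best_err = weights.copy(), err
            t *= cooling  # geometric cooling schedule
        return best, best_err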

More details can be found in:

Jordan, Frederic. Neural Crystal: A recurrent network for temporal applications. International Conference on Systems, Man and Cybernetics, San Antonio. Published by IEEE.

The following pictures are taken from a video we made showing the relaxation process of a 10000-neuron network. The first two frames show two stages of the relaxation of a globally unstable network. The next two show the reaction of a stable network to successive temporal pulses. The goal of these simulations was to estimate the speed at which information propagates inside the network. Because of the size of such a network, no learning could be attempted. The last picture shows the behaviour of a network with randomly chosen connections. Understanding such temporal networks could open fascinating possibilities in the future; no computing power available on Earth today seems able to handle learning in systems of this size.


Symmetries in Error Hypersurface of Multilayered Networks

I started my work in neural networks as a pure hobby, concentrating first on multilayered networks. In particular, I was interested in the relation between the shape of the error surface and the topological structure of the network. This can sound a bit abstract, so consider the following example: take a 3-layer network with 2 neurons in the hidden layer. For a given set of weights, this network has a given transfer function. Now swap the 2 neurons of the hidden layer... yes, the transfer function does not change. Starting from this simple remark, one can formalize things using the concept of Coherent Transformations:

These are operations, like swapping, which modify the weights of the network without changing anything in its transfer function. I found only 2 of them: swapping two hidden neurons, and inverting the signs of the weights before and after a neuron.
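
As a small numerical check (my own sketch, not code from the paper), the snippet below builds a tiny network with 2 hidden units and verifies that both coherent transformations leave the output unchanged. It assumes a tanh hidden activation, for which the sign-flip symmetry holds in this exact form because tanh is odd:

    import numpy as np

    def mlp(x, W1, b1, W2, b2):
        # 3-layer network: input -> tanh hidden layer -> linear output
        return np.tanh(x @ W1 + b1) @ W2 + b2

    rng = np.random.default_rng(0)
    x = rng.standard_normal((5, 3))            # 5 sample inputs, 3 input units
    W1 = rng.standard_normal((3, 2)); b1 = rng.standard_normal(2)
    W2 = rng.standard_normal((2, 1)); b2 = rng.standard_normal(1)

    y = mlp(x, W1, b1, W2, b2)

    # Coherent transformation 1: swap the two hidden neurons
    perm = [1, 0]
    y_swap = mlp(x, W1[:, perm], b1[perm], W2[perm, :], b2)

    # Coherent transformation 2: invert signs before and after hidden neuron 0
    W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
    W1f[:, 0] *= -1; b1f[0] *= -1; W2f[0, :] *= -1
    y_flip = mlp(x, W1f, b1f, W2f, b2)

    print(np.allclose(y, y_swap), np.allclose(y, y_flip))  # True True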

If you now use a matrix formalism to express the relaxation operation and the coherent transformations as morphisms, you can demonstrate very interesting properties of the error surface. In particular, it is possible to show that the error surface has symmetry properties, and to locate the symmetry planes. The error surface can then be folded several times down to a kind of kernel error surface which contains all the transfer functions a given network can generate. Notice that this means that when people run regular backpropagation, they are in fact walking on the total error surface, which is wasteful since it contains many symmetric copies of the same functions! I have also shown that these symmetries can even be harmful to gradient techniques, since the slope of the hypersurface goes to 0 in some directions.
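
To give a rough idea of the size of this redundancy (a back-of-the-envelope count of my own, not a figure quoted from the paper): for a single hidden layer of h neurons with an odd activation, each weight vector has h! * 2^h coherent copies, one for each permutation of the hidden neurons combined with each choice of sign flips. For the 2-neuron example above this is 2! * 2^2 = 8 equivalent points; for 10 hidden neurons it is already 10! * 2^10, about 3.7 * 10^9, which is why folding the surface down to its kernel is attractive.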

More information can be found in the following publication:

Jordan, Frederic; Clement, Guillaume. Using the symmetries of a multi-layered network to reduce the weight space. International Joint Conference on Neural Networks (IJCNN-91), Seattle, WA, USA, July 8-12, 1991. Published by IEEE.

